Maximum-Entropy Exploration with Future State-Action Visitation Measures

Bolland, Adrien, Lambrechts, Gaspard, Ernst, Damien

arXiv.org Machine Learning

Maximum entropy reinforcement learning motivates agents to explore states and actions to maximize the entropy of some distribution, typically by providing additional intrinsic rewards proportional to that entropy function. In this paper, we study intrinsic rewards proportional to the entropy of the discounted distribution of state-action features visited during future time steps. This approach is motivated by two results. First, we show that the expected sum of these intrinsic rewards is a lower bound on the entropy of the discounted distribution of state-action features visited in trajectories starting from the initial states, which we relate to an alternative maximum entropy objective. Second, we show that the distribution used in the intrinsic reward definition is the fixed point of a contraction operator and can therefore be estimated off-policy. Experiments highlight that the new objective leads to improved visitation of features within individual trajectories, in exchange for slightly reduced visitation of features in expectation over different trajectories, as suggested by the lower bound. It also leads to improved convergence speed for learning exploration-only agents. Control performance remains similar across most methods on the considered benchmarks.
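To make the idea concrete, here is a minimal sketch of an entropy-based intrinsic bonus over a discounted visitation distribution. It is an illustration only, not the paper's estimator: it computes the empirical discounted distribution of discrete feature indices along a single trajectory and its entropy, whereas the paper works with the fixed point of a contraction operator and estimates it off-policy. The feature indexing and toy trajectory are assumptions.

```python
import numpy as np

def discounted_visitation(features, n_features, gamma=0.99):
    """Empirical discounted distribution over discrete feature indices
    visited along one trajectory (a single-trajectory simplification
    of the future state-action visitation measure)."""
    weights = gamma ** np.arange(len(features))
    dist = np.zeros(n_features)
    np.add.at(dist, features, weights)  # accumulate discounted visit mass
    return dist / dist.sum()

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector (eps avoids log(0))."""
    return float(-np.sum(p * np.log(p + eps)))

# Toy trajectory over 4 discrete features; the intrinsic bonus is
# proportional to the entropy of the discounted visitation distribution.
traj = [0, 1, 1, 2, 3, 3, 3]
d = discounted_visitation(traj, n_features=4)
bonus = entropy(d)
```

A uniform visitation of all features would maximize the bonus at log(4), so the reward pushes the agent toward covering features within each trajectory.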


Successor-Predecessor Intrinsic Exploration

Yu, Changmin, Burgess, Neil

Neural Information Processing Systems

Exploration is essential in reinforcement learning, particularly in environments where external rewards are sparse. Here we focus on exploration with intrinsic rewards, where the agent transiently augments the external rewards with self-generated intrinsic rewards.
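The "transient" augmentation described above can be sketched as an intrinsic bonus whose coefficient decays over training, so exploration pressure fades as external reward becomes informative. The coefficient schedule and hyperparameter values below are illustrative assumptions, not taken from the paper.

```python
def augmented_reward(r_ext, r_int, step, beta0=1.0, decay=0.999):
    """Combine external and intrinsic rewards with a decaying
    coefficient: beta -> 0 as training progresses, so the intrinsic
    bonus is only a transient exploration signal."""
    beta = beta0 * decay ** step
    return r_ext + beta * r_int
```

Early in training the agent effectively optimizes `r_ext + r_int`; late in training the intrinsic term vanishes and the objective reduces to the external reward alone.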


A Algorithms

Neural Information Processing Systems

We directly adopt the official default settings for Atari games.

B.2 Minecraft Environment Settings

Table 1 outlines how we set up and initialize the environment for each harvest task. Our method is tested in two different biomes, plains and sunflower plains, both of which offer a wide field of view. In Minecraft, the action space is an 8-dimensional multi-discrete space.
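An 8-dimensional multi-discrete action space means each action is a vector of eight integers, one per dimension, each drawn from its own discrete range. A minimal NumPy sketch, where the per-dimension branch sizes in `NVEC` are hypothetical (the text does not specify them):

```python
import numpy as np

# Hypothetical branch sizes for an 8-dimensional multi-discrete
# action space; the actual sizes are not given in the text.
NVEC = np.array([3, 3, 4, 25, 25, 8, 2, 2])

def sample_action(rng):
    """Sample one action: an integer per dimension, each below its
    dimension's branch size."""
    return rng.integers(0, NVEC)

rng = np.random.default_rng(0)
a = sample_action(rng)
```

This mirrors what `gymnasium.spaces.MultiDiscrete(NVEC).sample()` would produce.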





Discovering Creative Behaviors through DUPLEX: Diverse Universal Features for Policy Exploration

Neural Information Processing Systems

The ability to approach the same problem from different angles is a cornerstone of human intelligence that leads to robust solutions and effective adaptation to problem variations. In contrast, current RL methodologies tend to lead to policies that settle on a single solution to a given problem, making them brittle to problem variations. Replicating human flexibility in reinforcement learning agents is the challenge that we explore in this work.


tion error; right: surprise. α is a hyperparameter we scanned over. We implement a new IM baseline: ICM (Pathak 2017 [23]).

Neural Information Processing Systems

We thank the reviewers for their thorough feedback, based on which we have made numerous improvements. (The original code is for discrete actions.) We also ran the IM baseline with the random object; the plot is similar to "tool" in Figure 1 and we omit it due to space constraints. Rev. #1 suggested that the environments could be solved by classic planning methods.